
Why don’t EA chapters exist at very prestigious high schools (e.g., Stuyvesant, Exeter, etc.)?

It seems like a relatively low-cost intervention (especially compared to something like Atlas), and these schools produce unusually strong outcomes. There’s also probably less competition than at universities for building genuinely high-quality intellectual clubs (this could totally be wrong).

Without a strongly supportive faculty member, I feel like you would struggle to make a group last longer than 2 years, since the succession and turnover dynamics of uni groups would be amplified. 

Could still be worthwhile even with the lack of sustainability.

Students for High Impact Charity was a project to support groups at high schools, but it had difficulty getting traction. It's hard to start a group from afar if you're not a student or faculty member there.

Seems right, though plausibly there are some EA/EA-adjacent students at some of these schools. 

FWIW I went to the best (or second best lol) high school in Chicago, Northside, and tbh the kids at these top city high schools are of comparable talent to the kids at Northwestern, with a higher tail as well. Moreover, everyone has way more time and can actually chew on the ideas of EA. There was a Jewish org that sent an adult once a week with food, and I pretty much went to all of them even though I would barely self-identify as Jewish, because of the free food and somewhere to sit and chat about random stuff while I waited for basketball practice. 

So yes, I think it would be highly successful. But I think you would need actual adult staff to come at least every other week (as Brian mentioned), and as far as I can tell EA is currently struggling pretty hard with organizing capacity, and it seems to be getting worse (in part because, as I have said many times, we don't celebrate organizers enough and focus the movement too much on intellectualism rather than coordination and organizing). So I kind of doubt there is a ton of capacity for this. But if there is, it's a good idea. I'm happy to help you understand how you could implement this at CPS selective enrollment schools if you want to do it yourself. 

I think it sounds like an exciting idea. In my role funding EA community building work over the years I've seen a few of these clubs, so there's not literally nothing, but it's true that it's much less common than at universities, and I'm not aware of EA groups at these specific high schools.

The answer to many questions of the form "why isn't there an EA group for XYZ" tends to be "no organizer / no one else working to make it happen" and I'm guessing that's the main answer here too.

Here’s a random org/project idea: hire full-time, thoughtful EA/AIS red teamers whose job is to seriously critique parts of the ecosystem — whether that’s the importance of certain interventions, movement culture, or philosophical assumptions. Think engaging with critics or adjacent thinkers (e.g., David Thorstad, Titotal, Tyler Cowen) and translating strong outside critiques into actionable internal feedback.

The key design feature would be incentives: instead of paying for generic criticism, red teamers receive rolling “finder’s fees” for critiques that are judged to be high-quality, good-faith, and decision-relevant (e.g., identifying strategic blind spots, diagnosing vibe shifts that can be corrected, or clarifying philosophical cruxes that affect priorities).

Part of why I think this is important is that I have the intuition that the marginal thoughtful contrarian is often more valuable than the marginal agreer, yet most movement funding and prestige flows toward builders rather than structured internal critics. If that’s true, a standing red-team org — or at least a permanent prize mechanism — could be unusually cost-effective.

There have been episodic versions of this (e.g., red-teaming contests, some longtermist critiquing stuff), but I’m not sure why this should come in waves rather than exist as ongoing infrastructure (org or just some prize pool that's always open for sufficiently good criticisms).

While I like the potential incentive alignment, I suspect finder’s fees are unworkable. It’s much easier to promise impartiality and fairness in a single game as opposed to an iterated one, and I suspect participants relying on the fees for income would become very sensitive to the nuances of previous decisions rather than the ultimate value of their critiques.


Ultimately, I don’t think there are many shortcuts in changing the philosophy of a movement. If something is worth challenging, then people strongly believe it, and there will have to be a process of contested diffusion from the outside in. You can encourage this in individual cases, but systemizing it seems difficult.

Dwarkesh (of the famed podcast) recently posted a call for new guest scouts. Given how influential his podcast is likely to be in shaping discourse around transformative AI (among other important things), this seems worth flagging and applying for (at least for students or early-career researchers in bio, AI, history, econ, math, or physics who have a few extra hours a week).

The role is remote, pays ~$100/hour, and expects ~5–10 hours/week. He’s looking for people who are deeply plugged into a field (e.g. grad students, postdocs, or practitioners) with high taste. Beyond scouting guests, the role also involves helping assemble curricula so he can rapidly get up to speed before interviews.

More details are in the blog post; link to apply (due Jan 23 at 11:59pm PST).

+1 I would love an EA to be working on this. 

  • Re the new 2024 Rethink Cause Prio survey: "The EA community should defer to mainstream experts on most topics, rather than embrace contrarian views. [“Defer to experts”]" 3% strongly agree, 18% somewhat agree, 35% somewhat disagree, 15% strongly disagree.
    • This seems pretty bad to me, especially for a group that frames itself as valuing intellectual humility and recognizing that we (like most intellectual movements, as a base rate) are so often wrong.
    • (Charitable interpretation) It's also just the case that EAs tend to have lots of views that they're being contrarian about because they're trying to maximize the expected value of information (often justified with something like: "usually contrarians are wrong, but when they are right, they are often more valuable for information than the average person who just agrees").
      • If this is the case, though, I fear that some of us are confusing the norm of being contrarian for instrumental reasons with being contrarian for "being correct" reasons. 

Tho lmk if you disagree. 

I think the "most topics" thing is ambiguous. There are some topics on which mainstream experts tend to be correct and some on which they're wrong, and although expertise is valuable on topics experts think about, they might be wrong on most topics central to EA. [1] Do we really wish we deferred to the CEO of PETA on what animal welfare interventions are best? EAs built that field in the last 15 years far beyond what "experts" knew before.

In the real world, assuming we have more than five minutes to think about a question, we shouldn't "defer" to experts or immediately "embrace contrarian views", but rather use their expertise and reject it when appropriate. Since this wasn't an option in the poll, my guess is that many respondents just expressed how much they like being contrarian, and EAs often have to be contrarian on the topics they think about, so it came out in favor of contrarianism.

[1] Experts can be wrong because they don't think in probabilities, they lack imagination, there are obvious political incentives to say one thing over another, and probably other reasons; and lots of the central EA questions don't have well-developed scientific fields around them, so many of the "experts" aren't people who have thought about similar questions in a truth-seeking way for many years.

I agree with Yarrow's anti-'truth-seeking' sentiment here. That phrase seems to primarily serve as an epistemic deflection device indicating 'someone whose views I don't want to take seriously and don't want to justify not taking seriously'.

I agree we shouldn't defer to the CEO of PETA, but CEOs aren't - often by their own admission - subject matter experts so much as people who can move stuff forwards. In my book the set of actual experts is certainly murky, but includes academics, researchers, sometimes forecasters, sometimes technical workers - sometimes CEOs but only in particular cases - anyone who's spent several years researching the subject in question. 

Sometimes, as you say, they don't exist, but in such cases we don't need to worry about deferring to them. When they do, it seems foolish not to upweight their views relative to our own unless we've done the same, or unless we have very concrete reasons to think they're inept or systematically biased (and perhaps even then).

Yeah, while I think truth-seeking is a real thing I agree it's often hard to judge in practice and vulnerable to being a weasel word.

Basically I have two concerns with deferring to experts. First, when the world lacks people with true subject matter expertise, whoever has the most prestige (maybe not CEOs, but certainly mainstream researchers on slightly related questions) will be seen as an expert, and we will need to worry about deferring to them.

Second, because EA topics are selected for being too weird/unpopular to attract mainstream attention/funding, I think a common pattern is that, of the best interventions, some are already funded, some are recommended by mainstream experts and remain underfunded, and some are too weird for the mainstream. It's not really possible to find the "too weird" kind without forming an inside view. We can start out deferring to experts, but by the time we've spent enough resources investigating the question to be at all confident in what to do, deferral to experts is partially replaced with understanding the research ourselves, as well as the load-bearing assumptions and biases of the experts. The mainstream experts will always get some weight, but it diminishes as our views start to incorporate their models rather than their views (an example that comes to mind is economists on whether AGI will create explosive growth, and how good economic models have recently been developed by EA sources, now including some economists who vary assumptions and justify differences from mainstream economists' assumptions).

Wish I could give more concrete examples but I'm a bit swamped at work right now.

MrBeast just released a video about “saving 1,000 animals”—a well-intentioned but inefficient intervention (e.g. shooting vaccines at giraffes from a helicopter, relocating wild rhinos before they fight each other to the death, covering bills for people to adopt rescue dogs from shelters, transporting lions via plane, and more). It’s great to see a creator of his scale engaging with animal welfare, but there’s a massive opportunity here to spotlight interventions that are orders of magnitude more impactful.

Given that he’s been in touch with people from GiveDirectly for past videos, does anyone know if there’s a line of contact to him or his team? A single video/mention highlighting effective animal charities—like those recommended by Animal Charity Evaluators (e.g. The Humane League, Faunalytics, Good Food Institute)—could reach tens of millions and (potentially) meaningfully shift public perception toward impact-focused giving for animals.

If anyone’s connected or has thoughts on how to coordinate outreach, this seems like a high-leverage opportunity. (I really have no idea how this sort of stuff works, but it seemed worth a quick take — feel free to lmk if I’m totally off base here.) 

[Image: manifesting]

Yooo - nice! Seems good and would cost under ~$100k. 

Agreed, Noah. For 15 k shrimps helped per $, it would cost 9.60 k$ (= 144*10^6/(15*10^3)).

Yep—Beast Philanthropy actually did an AMA here in the past! My takeaway was that the video comes first, so that your chances of a partnership would greatly increase if you can make it entertaining. This is somewhat in contrast with a lot of EA charities, which are quite boring, but I suspect on the margins you could find something good.

What IMHO worked for GiveDirectly in that video, and for Shrimp Welfare in their public outreach, has been the counterintuitiveness of some of these interventions. Wild animals, cultured meat, and shrimp are more likely to fit in this bucket than corporate campaigns for chickens, I reckon.

As Huw says, the video comes first. I think this puts almost anything you'd be excited about off the table. Factory farming is a really aversive topic for people, and people are quite opposed to large scale WAS interventions. The intervention in the video he did make wasn't chosen at random. People like charismatic megafauna.

Probably(?) big news on PEPFAR (title: White House agrees to exempt PEPFAR from cuts): https://thehill.com/homenews/senate/5402273-white-house-accepts-pepfar-exemption/. (Credit to Marginal Revolution for bringing this to my attention) 

Idea for someone with a bit of free time: 

While I don't have the bandwidth for this atm, someone should make a public (or private for, say, policy/reputation reasons) list of people working in (one or multiple of) the very neglected cause areas — e.g., digital minds (this is a good start), insect welfare, space governance, AI-enabled coups, and even AI safety (more for the second reason than others). Optional but nice-to-have(s): notes on what they’re working on, time contributed, background, sub-area, and the rough rate of growth in the field (you probably don’t want to decide career moves purely on current headcounts). And remember: perfection is gonna be the enemy of the good here.

Why this matters

Coordination.
It’s surprisingly hard to know who’s in these niches (independent researchers, part-timers, new entrants, maybe donors). A simple list would make it easier to find collaborators, talk to the right people, and avoid duplicated work.

Neglectedness clarity.
A major reason to work on ultra-neglected causes is… neglectedness. But we often have no real headcount, and that may push people into (or out of) fields they wouldn’t otherwise choose. Even technical AI safety numbers are outdated — the last widely cited 80k estimate (2022) was ~200 people, which is clearly out of date now. (To their credit, they emphasized the difficulty and tried to update.)

Even rough FTE (full time equivalent) estimates + who’s active in each area would be a huge service for some fields. 

Started something sorta similar about a month ago: https://saul-munn.notion.site/A-collection-of-content-resources-on-digital-minds-AI-welfare-29f667c7aef380949e4efec04b3637e9?pvs=74

One reason a comprehensive version of this would be difficult for insect welfare is that a couple of projects are 'undercover'. Rethink Priorities have guidance on donating to insects, shrimp and wild animals that might be relevant.

Separately, I understand @JordanStone has a pretty comprehensive sense of who's who in space governance, and would be a good person to contact if you're thinking about getting into this field.

Yeah, lists exist for all the people working on space governance from a longtermist perspective, and they tend to list about 10-15 people. I'm like 90% sure I know of everyone working on longtermist space governance, and I'd estimate that there are the equivalent of ~3 people working full time on this. There's not as much undercover work required for space governance, but I don't like to share lists of names publicly without permission.

At the moment, the main hub for space governance is Forethought and most people contact Fin Moorhouse to learn more about space governance as he's the author of the 80K problem profile on space governance and has been publishing work with Forethought on or related to space governance. From there, people tend to get a lay of the land, introductions are made, and newcomers will get a good idea of what people are working on and where they might be able to contribute.

What a wonderful idea! Mayank referred me over to this post, and I think EA at UIUC might have to hop on this project. I'll see about starting something in the next month or so and sharing a link to where I'm compiling things in case anyone else is interested in collaborating on this. Or, it's possible an initiative like it already exists that I'll stumble upon while investigating (though such a thing may well be outdated).

Very random but: 

If anyone is looking for a name for a nuclear risk reduction/ x-risk prevention org, consider (The) Petrov Institute. It's catchy, symbolic, and sounds like it has prestige. 

Unfortunately it also sounds Russian, which has some serious downsides at the moment....

Perhaps this downside could be partly mitigated by expanding the name to make it sound more global or include something Western, for example: Petrov Center for Global Security or Petrov–Perry Institute (in reference to William J. Perry). (Not saying these are the best names.)

For me at least, that implies an institute founded by or affiliated with somebody named Petrov, not just inspired by somebody, and it would seem slightly sketchy for it not to be.

Although there is the Alan Turing Institute, Ada Lovelace Institute, Leverhulme Centre, Simon Institute, etc.

Looking back at old 80k podcasts, and this is what I see (lol): 

They're both great episodes, though — relistened to #138 last week :)
